Implementing stochastic gradient descent

  • Define a linear function y = ax + b and choose some true weights
  • Define an actual Python function that calculates this
  • Generate some random x as input
  • Set up some initial guesses for a and b
  • Set up the cost function (mean squared error)
  • Find the derivatives of the cost function with respect to a and b

In [139]:
a = 4
b = -2

In [140]:
import numpy as np

In [141]:
x = np.random.rand(10,1)

In [142]:
def lin(x, a, b): return a*x + b                               # the "true" linear function
def guess(x, guess_a, guess_b): return guess_a * x + guess_b   # same form, with the current guesses
y = lin(x, a, b)
def error(y_guess): return sum(np.square(y_guess - y))         # sum of squared errors against the true y

We want to minimise the cost function. It depends on two parameters, a and b, so we need the derivative of the cost with respect to each.


In [145]:
# Per-sample derivatives of the squared error with respect to b and a
# (named dydb/dyda here, though strictly they are dC/db and dC/da)
def dydb(y_guess): return 2 * (y_guess - y)
def dyda(y_guess): return 2 * (y_guess - y) * x
lr = 0.7
epochs = 10
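As a sanity check (a sketch, not part of the original run), these analytic gradients can be compared against numerical finite differences. All names below (cost, a0, b0, etc.) are illustrative, not from the notebook:

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(10, 1)
y = 4 * x - 2                      # same true line as above

def cost(a, b):
    return np.square(a * x + b - y).mean()

a0, b0 = 1.0, 1.0
resid = a0 * x + b0 - y
grad_a = (2 * resid * x).mean()    # analytic dC/da
grad_b = (2 * resid).mean()        # analytic dC/db

eps = 1e-6                         # central finite differences
num_a = (cost(a0 + eps, b0) - cost(a0 - eps, b0)) / (2 * eps)
num_b = (cost(a0, b0 + eps) - cost(a0, b0 - eps)) / (2 * eps)
print(abs(grad_a - num_a) < 1e-6, abs(grad_b - num_b) < 1e-6)
```

If the analytic and numerical values agree to within tolerance, the update rule below is descending along the true gradient.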

In [146]:
guess_a = 1.0
guess_b = 1.0
for i in range(epochs):
    y_guess = lin(x, guess_a, guess_b)
    # print("Error is: " + str(error(y_guess)))
    guess_a = guess_a - lr * dyda(y_guess).mean()
    guess_b = guess_b - lr * dydb(y_guess).mean()
    # print("guess_a is " + str(guess_a) + " and guess_b is " + str(guess_b))
print("Error is: " + str(error(y_guess)))
print("guess_a is " + str(guess_a) + " and guess_b is " + str(guess_b))


Error is: [ 1.65540072]
guess_a is 3.11239836023 and guess_b is -1.36540938769
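After only 10 epochs the guesses are still some way from a = 4 and b = -2. As a rough check (a sketch with illustrative names, not part of the notebook), running the same descent for many more epochs should recover the true weights almost exactly; np.linalg.lstsq gives the closed-form least-squares answer for comparison:

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(10, 1)
y = 4 * x - 2                       # same true line, no noise

guess_a, guess_b, lr = 1.0, 1.0, 0.7
for _ in range(1000):               # many more epochs than above
    resid = guess_a * x + guess_b - y
    guess_a -= lr * (2 * resid * x).mean()
    guess_b -= lr * (2 * resid).mean()

A = np.hstack([x, np.ones_like(x)])            # design matrix [x, 1]
coef = np.linalg.lstsq(A, y, rcond=None)[0]    # closed-form [a, b]
print(guess_a, guess_b)    # should approach 4 and -2
print(coef.ravel())        # closed-form answer for comparison
```

Since the data here is noise-free, both routes should agree; with a learning rate of 0.7 the descent is close to the stability limit for this x sample, so a smaller lr would be the safer default.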

